
    Fostering energy-awareness in scientific cloud users

    © 2014 IEEE. Academic cloud infrastructures are constructed and maintained so that they minimally constrain their users. Since they are free and do not limit usage patterns, academics have developed behavior that jeopardizes fair and flexible resource provisioning. For efficiency, related work either explicitly limits user access to resources or introduces automatic rationing techniques. Surprisingly, the root cause (i.e., the user behavior) is disregarded by these approaches. This paper compares academic cloud user behavior to its commercial equivalent. We deduce that academics should behave like commercial cloud users to relieve resource provisioning. To encourage this behavior, we propose an architectural extension to academic infrastructure clouds. We evaluate our extension via a simulation using real-life academic resource request traces. We show a potential resource usage reduction while maintaining the unlimited nature of academic clouds.

    Fostering energy-awareness in simulations behind scientific workflow management systems

    © 2014 IEEE. Scientific workflow management systems face a new challenge in the era of cloud computing. The rich information about the state of the underlying infrastructures that used to be available is gone. Thus, organising virtual infrastructures so that they not only support the workflow being executed but also optimise for several service level objectives (e.g., maximum energy consumption limit, cost, reliability, availability) becomes dependent on good infrastructure modelling and prediction techniques. While simulators have been successfully used in the past to aid research on such workflow management systems, the currently available cloud-related simulation toolkits suffer from several issues (e.g., scalability, narrow scope) that hinder their applicability. To address this need, this paper introduces techniques for unifying two existing simulation toolkits by first analysing the problems with the current simulators, and then illustrating the problems faced by workflow systems through the example of the ASKALON environment. Finally, we show how the unification of the selected simulators improves on the discussed problems.

    An architecture to stimulate behavioral development of academic cloud users

    Academic cloud infrastructures are constructed and maintained so that they minimally constrain their users. Since they are free and do not limit usage patterns, academics have developed behavior that jeopardizes fair and flexible resource provisioning. For efficiency, related work either explicitly limits user access to resources or introduces automatic rationing techniques. Surprisingly, the root cause (i.e., the user behavior) is disregarded by these approaches. This article compares academic cloud user behavior to its commercial equivalent. We deduce that academics should behave like commercial cloud users to relieve resource provisioning. To encourage commercial-like behavior, we propose an architectural extension to existing academic infrastructure clouds. First, every user's energy consumption and efficiency are monitored. Then, energy-efficiency-based leader boards are used to ignite competition between academics and reveal their worst practices. Leader boards alone are not sufficient to completely change user behavior, so we introduce engaging options that encourage academics to delay resource requests and to prefer resources more suitable for the infrastructure's internal provisioning. Finally, we evaluate our extensions via a simulation using real-life academic resource request traces. We show a potential resource utilization reduction (by a factor of up to 2.6) while maintaining the unlimited nature of academic clouds. © 2014 Elsevier Inc.
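
    To make the leader-board idea above concrete, the following is a minimal sketch (not taken from the article) of ranking users by an energy-efficiency score; the UserRecord fields and the efficiency formula are illustrative assumptions, not the article's actual metrics.

    # Hypothetical leader-board sketch; field names and the efficiency
    # formula are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class UserRecord:
        name: str
        energy_kwh: float          # monitored energy consumed by the user's VMs
        useful_core_hours: float   # useful work the user extracted from them

    def efficiency(u: UserRecord) -> float:
        # Higher is better: useful work delivered per kWh consumed.
        return u.useful_core_hours / u.energy_kwh if u.energy_kwh else 0.0

    def leader_board(users: list[UserRecord]) -> list[tuple[str, float]]:
        # Rank users by efficiency so the worst practices become visible.
        return sorted(((u.name, efficiency(u)) for u in users),
                      key=lambda pair: pair[1], reverse=True)

    if __name__ == "__main__":
        users = [UserRecord("alice", 120.0, 900.0),
                 UserRecord("bob", 200.0, 650.0)]
        for name, score in leader_board(users):
            print(f"{name}: {score:.2f} core-hours/kWh")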

    Service-oriented production grids and user support

    Currently, several production Grids offer their resources to academic communities. These Grids are resource-oriented Grids with minimal user support. The existing user support incorporates Grid portals without workflow editing and execution capabilities, brokering with no QoS and SLA management, security solutions without privacy and trust management, etc. They do not provide any kind of support for running legacy code applications on Grids. Production Grids have started the migration from resource-oriented Grids to service-oriented ones. This migration places additional requirements on user support, including solving interoperability among Grids, automatic service deployment, dynamic user management, legacy code support, QoS- and SLA-based brokering, etc. This paper discusses some aspects of the user support needed for service-oriented production Grids.

    Automatic deployment of interoperable legacy code services

    The Grid Execution Management for Legacy Code Architecture (GEMLCA) enables exposing legacy applications as Grid services without re-engineering the code or even requiring access to the source files. The integration of the current GT3- and GT4-based GEMLCA implementations with the P-GRADE Grid portal allows the creation, execution and visualisation of complex Grid workflows composed of legacy and non-legacy components. However, the deployment of legacy codes and the mapping of their execution to Grid resources are currently done manually. This paper outlines how GEMLCA can be extended with automatic service deployment, brokering, and information system support. Conceptual architectures for an Automatic Deployment Service (ADS) and an x-Service Interoperability Layer (XSILA) are introduced, explaining how these mechanisms will support the desired features in future releases of GEMLCA.
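
    As a rough illustration of the wrapping idea (a sketch under assumptions, not GEMLCA's actual interface), a legacy executable can be exposed behind a simple front-end that only stages input, invokes the unmodified binary, and returns its output; the function name and parameters below are placeholders.

    # Hypothetical front-end for an unmodified legacy binary; names and
    # parameters are placeholders, not GEMLCA's interface.
    import subprocess
    import tempfile

    def run_legacy_code(executable: str, args: list[str], stdin_text: str = "") -> str:
        # Each invocation gets its own working directory, mimicking job staging.
        with tempfile.TemporaryDirectory() as workdir:
            result = subprocess.run(
                [executable, *args],
                input=stdin_text,
                capture_output=True,
                text=True,
                cwd=workdir,
                check=True,
            )
            return result.stdout

    if __name__ == "__main__":
        # /bin/echo stands in for a real legacy code in this example.
        print(run_legacy_code("/bin/echo", ["hello from a wrapped legacy code"]))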

    LAYSI: A layered approach for SLA-violation propagation in self-manageable cloud infrastructures

    Cloud computing represents a promising computing paradigm where computing resources have to be allocated to software for their execution. Self-manageable Cloud infrastructures are required to achieve that level of flexibility on the one hand, and to comply with users' requirements specified by means of Service Level Agreements (SLAs) on the other. Such infrastructures should automatically respond to changing component, workload, and environmental conditions, minimizing user interactions with the system and preventing violations of agreed SLAs. However, identifying the sources responsible for a possible SLA violation and deciding on the reactive actions necessary to prevent it is far from trivial. First, in this paper we present a novel approach for mapping low-level resource metrics to the SLA parameters necessary for the identification of failure sources. Second, we devise a layered Cloud architecture for the bottom-up propagation of failures to the layer that can react to sensed SLA violation threats. Moreover, we present a communication model for the propagation of SLA violation threats to the appropriate layer of the Cloud infrastructure, which includes negotiators, brokers, and an automatic service deployer. © 2010 IEEE.
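
    As a minimal sketch of the metric-to-SLA mapping described above (the metric names, the availability formula, and the threat margin are assumptions for illustration, not LAYSI's actual rules), low-level monitoring data can be aggregated into an SLA parameter and checked against the agreed value before a violation actually occurs.

    # Illustrative mapping of low-level metrics to an SLA parameter plus a
    # simple violation-threat check; thresholds are invented for the example.

    def availability(uptime_s: float, downtime_s: float) -> float:
        # Map raw monitoring data (low-level metrics) to the SLA-level parameter.
        total = uptime_s + downtime_s
        return uptime_s / total if total else 1.0

    def sla_threat(measured: float, agreed: float, margin: float = 0.001) -> bool:
        # Flag a threat before the agreed value is actually violated,
        # so the responsible layer still has time to react.
        return measured < agreed + margin

    if __name__ == "__main__":
        avail = availability(uptime_s=86000, downtime_s=400)
        print(f"measured availability = {avail:.4f}")
        if sla_threat(avail, agreed=0.995):
            print("propagate SLA violation threat to the layer that can react")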

    Facilitating self-adaptable inter-cloud management

    Cloud computing infrastructures have so far been developed as individual islands with mostly proprietary solutions. However, as more and more infrastructure providers apply the technology, users face the inevitable question of using multiple infrastructures in parallel. Federated cloud management systems offer a simplified use of these infrastructures by hiding their proprietary solutions. As the infrastructure underneath these systems becomes more complex, situations that users cannot easily handle (such as system failures or load peaks and slopes) occur more and more frequently. Therefore, federations need to manage these situations autonomously, without user interaction. This paper introduces a methodology to autonomously operate cloud federations by controlling their behavior with the help of knowledge management systems. Such systems not only suggest reactive actions to comply with established Service Level Agreements (SLAs) between provider and consumer, but also find a balance between the fulfillment of established SLAs and resource consumption. The paper adopts rule-based techniques as its knowledge management solution and provides an extensible rule set for federated clouds built on top of multiple infrastructures. © 2012 IEEE.
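
    The rule-based approach can be pictured with a toy rule set like the one below (a sketch only; the condition fields and suggested actions are invented for illustration and do not reproduce the paper's rule base), where each rule weighs SLA fulfillment against resource consumption.

    # Toy rule set for federation-level decisions; conditions and actions
    # are illustrative assumptions, not the paper's actual rules.
    RULES = [
        # (condition on the sensed state, suggested reactive action)
        (lambda s: s["sla_threat"] and s["utilisation"] > 0.9,
         "redirect new VM requests to another cloud of the federation"),
        (lambda s: s["sla_threat"] and s["utilisation"] <= 0.9,
         "scale the affected service up on the current cloud"),
        (lambda s: not s["sla_threat"] and s["utilisation"] < 0.3,
         "consolidate VMs and power down spare hosts"),
    ]

    def decide(state: dict) -> str:
        # First matching rule wins; the default keeps the federation unchanged.
        for condition, action in RULES:
            if condition(state):
                return action
        return "no action"

    if __name__ == "__main__":
        print(decide({"sla_threat": True, "utilisation": 0.95}))
        print(decide({"sla_threat": False, "utilisation": 0.2}))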

    Legacy code support for production grids

    In order to improve reliability and to deal with the high complexity of existing middleware solutions, today's production Grid systems restrict the services that can be deployed on their resources. On the other hand, end-users require a wide range of value-added services to fully utilize these resources. This paper describes how legacy code support can be offered as a third-party service for production Grids. The introduced solution, based on the Grid Execution Management for Legacy Code Architecture (GEMLCA), does not require the deployment of additional applications on the Grid resources or any extra effort from Grid system administrators. The implemented solution was successfully connected to and demonstrated on the UK National Grid Service. © 2005 IEEE.

    Distributed Environment for Efficient Virtual Machine Image Management in Federated Cloud Architectures

    The use of Virtual Machines (VMs) in Cloud computing provides various benefits across the software engineering lifecycle, including efficient elasticity mechanisms that result in higher resource utilization and lower operational costs. VMs, as software artifacts, are created using provider-specific templates called VM images (VMIs) and are stored in proprietary or public repositories for further use. However, some technology-specific choices can limit interoperability among Cloud providers and bundle the VMIs with non-essential or redundant software packages, leading to increased storage size, prolonged VMI delivery, slow VMI instantiation, and ultimately vendor lock-in. To address these challenges, we present a set of novel functionalities and design approaches for the efficient operation of distributed VMI repositories, specifically tailored to enable: (i) simplified creation of lightweight, size-optimized VMIs tuned for specific application requirements; (ii) multi-objective VMI repository optimization; and (iii) an efficient reasoning mechanism that helps optimize complex VMI operations. The evaluation results confirm that the presented approaches can reduce VMI size by up to 55% while trimming image creation time by 66%. Furthermore, the repository optimization algorithms can reduce VMI delivery time by up to 51% and cut storage expenses by 3%. Moreover, by implementing replication strategies, the optimization algorithms can increase system reliability by 74%.
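
    The size-optimization idea in (i) can be sketched as keeping only the packages an application actually needs when assembling a VMI; the package names, sizes, and dependency map below are made up for illustration and are not the system's real data or algorithm.

    # Hypothetical VMI size-optimisation sketch; packages, sizes (MB) and the
    # dependency map are invented for illustration.
    BASE_IMAGE = {"kernel": 70, "libc": 10, "python3": 45,
                  "gcc": 120, "texlive": 2000, "openjdk": 250}

    DEPENDENCIES = {"python3": {"libc"}, "openjdk": {"libc"}, "gcc": {"libc"}}

    def required_packages(app_needs: set[str]) -> set[str]:
        # Add the direct dependencies of each requested package
        # (the map here is flat, so a single pass is enough).
        needed = set(app_needs) | {"kernel", "libc"}
        for pkg in list(needed):
            needed |= DEPENDENCIES.get(pkg, set())
        return needed

    def optimised_size(app_needs: set[str]) -> int:
        keep = required_packages(app_needs)
        return sum(size for pkg, size in BASE_IMAGE.items() if pkg in keep)

    if __name__ == "__main__":
        full = sum(BASE_IMAGE.values())
        slim = optimised_size({"python3"})
        print(f"full image: {full} MB, tuned image: {slim} MB "
              f"({100 * (full - slim) / full:.0f}% smaller)")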